Here's a claim that's gotten more interesting lately: the most important thing in software design isn't the logic, the algorithms, or even the architecture. It's the interfaces. The contracts. The agreements between parts of your system about how they'll talk to each other.
Everything else — the business logic, the database queries, the clever optimizations — is just filling in the gaps. If your interfaces are right, the rest is a walk in the park. If they're wrong, no amount of brilliant implementation will save you.
I used to think this was just elegant theory. Then I started working with coding assistants, and suddenly it became visceral. When a machine tries to use your API, it doesn't have the context you had when you built it. It doesn't read the documentation and think, "Oh, I see what they meant." It only has the contract. And if that contract is unclear, overloaded, or riddled with implicit assumptions that only human intuition can fill? The AI fails in ways that are both predictable and illuminating.
Your interfaces are about to get stress-tested by consumers that can't guess what you meant.
What We Mean by "Interface"
Not a user interface. Not a screen with buttons. An interface in the programming sense: a contract that one piece of code presents to the world. It says, "Here's what I need, here's what I promise to give back, and I don't care who you are as long as you play by these rules."
There's something in this definition that's easy to miss but becomes crucial when you're working with AI agents: contracts are inherently one-directional in their authority. One party sets the terms; the other complies. You don't negotiate with a function signature. You either satisfy it or you don't.
This asymmetry is both the power and the fragility of interfaces. They impose order, but they impose someone's order — someone who had to make assumptions about the future, about context, about what "obvious" means. Humans can read between the lines. AI agents can't. They expose every hidden assumption in your design.
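To make that asymmetry concrete, here's a minimal sketch with a hypothetical `charge` function. The caller either satisfies the signature or the compiler refuses; there is no middle ground:

```typescript
// The signature is the whole contract: cents and a currency code, nothing else.
function charge(amountInCents: number, currency: 'USD' | 'EUR'): void {
  // The body is invisible to the caller; only the signature binds.
}

charge(1999, 'USD');        // satisfies the contract
// charge('19.99', 'usd');  // rejected at compile time; the contract doesn't bend
```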
The SOLID Dream (And Its Limits)
The software engineering literature is full of love for interfaces. The SOLID principles — that collection of design commandments that appears in every senior developer interview — lean heavily on them. It's worth remembering that SOLID was never intended to be a religion; these are heuristics, not laws. The skill lies in knowing when and how to apply them. But several of them become more interesting in the context of AI.
The Dependency Inversion Principle says depend on abstractions, not concretions. The Interface Segregation Principle says keep your interfaces narrow and focused: many small contracts rather than one bloated one. And the Liskov Substitution Principle says any implementation should be swappable for another without breaking the system — which only works if the contract is honest about what it promises.
And in theory? This is elegant. In practice, I've spent years creating interfaces "just in case" that never got a second implementation. One interface, one concrete class, and that extra layer of indirection just sitting there because the books said so.
But here's what's changed: AI agents are making polymorphism relevant again. When you're working with coding assistants that need to understand and extend your codebase, those clean abstractions become genuinely valuable. An AI can reason about your repository interface much more effectively than it can reason about a specific MongoDB implementation with business logic scattered throughout. And Liskov's principle suddenly has a new consumer: an AI agent that will take your contract at face value and expect any implementation to behave consistently.
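Here's a minimal sketch of what I mean, with hypothetical `User` and `UserRepository` types:

```typescript
interface User { id: string; email: string }

// A contract an agent can reason about without ever seeing MongoDB.
interface UserRepository {
  save(user: User): Promise<void>;
  findById(id: string): Promise<User | null>;
}

class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();
  async save(user: User): Promise<void> { this.users.set(user.id, user); }
  async findById(id: string): Promise<User | null> { return this.users.get(id) ?? null; }
}

// A MongoUserRepository would implement the same contract. Liskov's promise:
// code written against UserRepository works with either one, unchanged.
```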
The Interface Segregation Principle deserves special attention here. A mega-interface that demands seventeen parameters and handles five different concerns? Good luck explaining that to an AI in a prompt. Better to have several narrow interfaces that compose cleanly. But then your implementations need to segregate too, and that's not always clean.
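A rough illustration of that segregation, again with made-up document-store interfaces:

```typescript
interface Doc { id: string; body: string }

// The mega-interface: five concerns in one contract.
interface DocumentStore {
  save(doc: Doc): Promise<void>;
  search(query: string): Promise<Doc[]>;
  exportPdf(id: string): Promise<Uint8Array>;
  notifyOwner(id: string): Promise<void>;
  auditLog(id: string): Promise<string[]>;
}

// Segregated: narrow contracts that compose. A consumer, human or AI,
// depends only on the slice it actually uses.
interface DocWriter { save(doc: Doc): Promise<void> }
interface DocSearch { search(query: string): Promise<Doc[]> }
interface DocExporter { exportPdf(id: string): Promise<Uint8Array> }

// The catch: one concrete class may still end up implementing several of these.
```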
Nothing about this is clean. But AI assistance makes the messiness more visible.
Ports and Adapters: More Relevant Than Ever
Hexagonal architecture — ports and adapters — was proposed by Alistair Cockburn back in 2005. The core idea is seductive: define your application entirely through its ports (interfaces), and let adapters handle the messy reality of databases, UIs, and external services. Your business logic sits in the center, blissfully ignorant of the outside world, communicating only through clean contracts.
I used to think this was over-engineered for most applications. Then I started watching AI agents try to integrate with various systems, and I changed my mind.
When an AI needs to connect to your application, those clean ports become incredibly valuable. The agent doesn't need to understand your entire architecture — it just needs to understand the contract. A well-designed port tells the story: "Send me data that looks like this, and I'll give you data that looks like that." No hidden dependencies, no magical global state, no assumptions about how you're "supposed" to use it.
The adapters handle the translation between the clean world of contracts and the messy world of reality. Your AI integration code talks to the port. The adapter worries about authentication headers, retry logic, and the fact that your database schema has thirty years of accumulated crud.
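A sketch of the shape, with an invented `PaymentPort` and a made-up payments endpoint:

```typescript
type ChargeResult = { ok: boolean; transactionId?: string };

// The port: what the application needs, stated as a contract.
interface PaymentPort {
  charge(amountInCents: number, customerId: string): Promise<ChargeResult>;
}

// The adapter: translates the clean port into the messy reality of an
// imagined external payment API (auth headers, error handling, and so on).
class HttpPaymentAdapter implements PaymentPort {
  constructor(private apiKey: string) {}

  async charge(amountInCents: number, customerId: string): Promise<ChargeResult> {
    const response = await fetch('https://payments.example.com/v1/charges', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ amount: amountInCents, customer: customerId }),
    });
    if (!response.ok) return { ok: false };
    const data = await response.json();
    return { ok: true, transactionId: data.id };
  }
}
```

The application code imports only `PaymentPort`; the adapter is the one place that knows the endpoint exists.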
When it works, it's genuinely elegant. You can swap components, test in isolation, and — crucially — explain the system to an AI without drowning it in implementation details.
Of course, when Cockburn makes it sound trivial ("Just define the interfaces, then fill in the blanks"), something's about to go sideways.
The Async Surprise (And Other Contract Killers)
Here's a story that's become more common as teams try to build AI-friendly architectures from scratch.
You're scaffolding a new service. You start simple — an in-memory database, maybe SQLite. You define your repository interface: `save(entity)`, `find_by_id(id)`, the usual suspects. Everything is synchronous, everything is clean. Your AI assistant can understand it perfectly and generate correct implementations.
Then you move to PostgreSQL. And suddenly: it's asynchronous. Where your in-memory implementation returned a value, the real one returns a promise, a future, a coroutine — some flavor of "I'll get back to you."
This isn't a minor detail. It fundamentally changes the contract. Every caller that depended on getting an immediate response now needs to handle async flow. Your beautiful interface doesn't just need updating — it needs rethinking, and the change cascades through everything built on top of it.
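In miniature, with a hypothetical `OrderRepository`, the break looks like this:

```typescript
interface Order { id: string; total: number }

// Version 1: the in-memory contract. Synchronous and honest, for now.
interface OrderRepository {
  save(order: Order): void;
  findById(id: string): Order | undefined;
}

// Version 2: the PostgreSQL reality. Same method names, different contract.
interface AsyncOrderRepository {
  save(order: Order): Promise<void>;
  findById(id: string): Promise<Order | undefined>;
}

// Every caller changes shape along with it:
//   const order = repo.findById('42');        // before
//   const order = await repo.findById('42');  // after, and the calling
//                                             // function must itself go async
```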
And your AI assistant? It's confused. The contract said one thing, but the implementation does another. The generated code doesn't compile, or worse, it compiles but has subtle race conditions because the AI assumed synchronous behavior.
A small change — synchronous to asynchronous — blows up the whole design.
The lesson isn't that hexagonal architecture is wrong. It's that designing contracts without knowing the future is the fundamental challenge of software design, and no amount of architectural purity eliminates it. Contracts are like legal documents: hard to draft, expensive to change, and everything depends on them.
But here's what's interesting: AI tools are making this problem more visible and forcing us to be more honest about it. When your coding assistant generates broken code because your contract was ambiguous, you get immediate feedback on the quality of your interface design. It's like having a very literal intern who exposes every gap in your specifications.
Abstraction Is Not Extraction
This distinction has become crucial as AI makes it easier to spot and eliminate duplication. Let me be clear about the difference:
Extraction is when you take a piece of code that's duplicated in two places and pull it out into its own function or module. This is refactoring. It reduces duplication. AI assistants are excellent at this — they can spot patterns across your codebase and suggest extractions that would take you hours to find manually.
Abstraction is when you move up a level — from specific implementations to a generic contract that can encompass multiple implementations. It's not about removing duplication; it's about discovering the common shape underneath different concrete things.
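A small sketch of the difference, using hypothetical pricing code:

```typescript
// Extraction: the same tax math appeared at two call sites, so it was
// pulled into one function. Known code, known behavior. A safe refactor.
function applyTax(amountInCents: number, rate: number): number {
  return Math.round(amountInCents * (1 + rate));
}

// Abstraction: a generic contract meant to cover implementations that do not
// exist yet. Not a refactor at all; a bet on the domain's future shape.
interface PriceAdjustment {
  apply(amountInCents: number): number;
}
```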
The difference matters because extraction is relatively safe, and abstraction is dangerous. When you extract, you're working with known code. When you abstract, you're making a bet about the future: "I believe these things are similar enough that a single contract can serve them all."
AI assistants can help with both, but they're much better at the first than the second. They can analyze existing code and pull out commonalities. But they can't predict how your domain will evolve or whether two similar-looking things should actually be unified.
That judgment? Still yours.
The Notification Service Trap
Let me show you where this goes wrong. Your company uses five different notification channels: Slack, email, SMS, push notifications, and webhooks. Five teams built five different implementations. Someone says, "Let's unify this. One notification service to rule them all."
You fire up your AI assistant and ask it to design a generic interface. It suggests:
```typescript
interface NotificationService {
  notify(message: string, recipient: string): Promise<void>
}
```
Beautiful. Clean. Generic. And completely wrong.
Because Slack needs channel names and thread IDs. Email needs subject lines, HTML bodies, and CC recipients. SMS has character limits. Push notifications need device tokens. Webhooks need URLs and authentication headers.
The AI generated syntactically correct code, but it made the classic abstraction mistake: it unified things that look similar but are fundamentally different. Your "message" parameter is doing wildly different work for each channel.
You can try to fix this with optional parameters or configuration objects, but now your "simple" interface is sprouting complexity. You're not abstracting; you're cramming reality into a shape it doesn't fit.
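One way out, sketched here with hypothetical types rather than prescribed ones, is to stop pretending the payloads are the same and let a discriminated union tell the truth:

```typescript
// One honest contract per channel instead of one dishonest contract for all five.
interface SlackNotification {
  kind: 'slack';
  channel: string;
  threadId?: string;
  text: string;
}

interface EmailNotification {
  kind: 'email';
  to: string;
  cc?: string[];
  subject: string;
  htmlBody: string;
}

interface SmsNotification {
  kind: 'sms';
  phoneNumber: string;
  text: string; // senders must respect the carrier's character limit
}

// The union keeps dispatch type-safe without pretending
// the payloads are interchangeable.
type AppNotification = SlackNotification | EmailNotification | SmsNotification;
```

It's more code than the one-method interface, but it's honest code.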
This is where the human insight becomes crucial. An AI can generate the abstraction, but it takes human domain knowledge to recognize when the abstraction is wrong. The machine sees the syntactic similarity. You understand the semantic differences.
DRY: When AI Makes It Worse
AI coding assistants are amazing at spotting duplication. They'll eagerly suggest extracting any repeated code into shared functions. This sounds helpful until you realize they're amplifying one of programming's most dangerous patterns.
DRY — Don't Repeat Yourself — is excellent advice when two pieces of code are genuinely the same thing. But it becomes a trap when two pieces of code look the same but are different things. When they happen to have the same shape today but exist for different reasons and will evolve in different directions.
Sandi Metz captured this perfectly: "Duplication is far cheaper than the wrong abstraction." The pattern is painfully recognizable:
1. AI spots duplication and suggests a shared function
2. You accept the suggestion because it looks clean
3. New requirement arrives that almost fits the abstraction
4. You add parameters and conditionals to accommodate
5. The abstraction accumulates flags and branching logic
6. Eventually, the "shared" code is more complex than the duplication it replaced
The root cause is treating similarity as sameness. Two functions that both "send a notification" might look identical to an AI, but if one alerts engineers on Slack and the other texts customers, they're fundamentally different operations that happen to share a verb.
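Here's step 6 in miniature, as hypothetical code with stubbed-out senders. The flags are the smell:

```typescript
declare function sendSms(phone: string, text: string): Promise<void>;
declare function postToSlack(channel: string, text: string): Promise<void>;

// The "shared" function after a few almost-fitting requirements.
function sendNotification(
  message: string,
  recipient: string,
  opts: { isCustomer?: boolean; escalate?: boolean } = {}
): Promise<void> {
  if (opts.isCustomer) {
    // Customers get SMS, so this path now silently truncates.
    return sendSms(recipient, message.slice(0, 160));
  }
  if (opts.escalate) {
    return postToSlack('#incidents', `@here ${message}`);
  }
  return postToSlack(recipient, message);
}
```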
Here's the thing: AI assistants are getting better at understanding context, but they still can't predict how your domain will evolve. They see the code as it is, not the forces that will change it. That's human territory.
The Agent-Friendly Interface
Working with AI has taught me something interesting about interface design. The best interfaces for AI consumption are also the best interfaces for humans — they're just more obviously better when a machine tries to use them.
An AI agent doesn't have the luxury of reading documentation and inferring intent. It can't ask clarifying questions in Slack or remember conversations about edge cases. It only has the contract. So the contract needs to be complete.
This means:
- Explicit over implicit. Don't rely on "obvious" defaults or conventions that live in tribal knowledge.
- Types over comments. The interface signature should tell the story, not the documentation.
- Composition over configuration. Instead of one complex interface with many modes, build several focused interfaces that compose clearly.
- Pure functions over stateful objects. State makes everything harder to reason about, whether you're human or machine.
I've started thinking of this as "agent-friendly" design. It's not that you're optimizing for AI — you're optimizing for clarity and explicitness, which happens to make AI integration much smoother.
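Put together, an agent-friendly contract might look like this minimal sketch (hypothetical names throughout):

```typescript
// Explicit over implicit: the caller states everything; no hidden defaults.
// Types over comments: the signature carries the rules.
// Pure over stateful: same input, same output, nothing to remember.
interface DiscountInput {
  subtotalInCents: number;
  customerTier: 'standard' | 'gold';
  couponApplied: boolean;
}

function computeDiscount(input: DiscountInput): number {
  const tierRate = input.customerTier === 'gold' ? 0.1 : 0;
  const couponRate = input.couponApplied ? 0.1 : 0;
  return Math.round(input.subtotalInCents * (tierRate + couponRate));
}
```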
The hexagonal architecture pattern becomes more appealing through this lens. Those clean ports aren't just good for testing and modularity; they're perfect integration points for AI agents that need to understand your system without drowning in implementation details.
So What Do We Actually Do?
If contracts are essential but impossible to get right, if AI amplifies both our good and bad abstractions, if the future is unknowable — what's the practical advice?
Start concrete, abstract later. Build the specific thing first. Understand its actual shape, requirements, and quirks. AI can help you build faster, but it can't help you understand your domain. Only when you have multiple concrete implementations can you see real commonality — not the imagined kind.
Use AI for exploration, not decisions. Let your coding assistant generate multiple interface designs. Let it show you different approaches. But you decide which ones actually make sense for your domain.
Design for explicit contracts. The rise of AI consumers makes interface clarity more important than ever. If an AI agent can't understand your interface without extensive context, humans probably struggle with it too.
Prefer duplication over the wrong abstraction. This is more important with AI assistance because it's so easy to generate plausible-looking but ultimately harmful abstractions. Duplication is visible and easy to fix. Wrong abstractions are invisible and expensive to unwind.
Treat interface changes as expensive. Every consumer of your interface is affected. This includes AI agents that have learned to use your API. Design for today's requirements, but accept that contracts evolve. Plan for change rather than trying to prevent it.
Remember that interfaces are the real architecture. Everything else is implementation details. When you review a design, look at the contracts first. When you evaluate an AI-generated solution, ask how it affects the interfaces. When you debate architecture, you're really debating contracts — whether you realize it or not.
The interfaces are where the design lives. The rest is just filling in the blanks. And in a world where machines are increasingly capable of filling those blanks, getting the interfaces right becomes the most human skill of all.